Today we're finally going to build the k8s cluster that will be used from here on!
- kubeadm: install 3 control planes
- kube-vip: set up the load balancer (VIP)
- cilium: the CNI
# Generate the default configuration
kubeadm config print init-defaults > kubeadm-config.yaml
After generating the default configuration file, modify the following:
- advertiseAddress: change it to the IP of the first master node
- bootstrapTokens: remove its contents
- controlPlaneEndpoint field: set it to the kube-vip VIP and port
- name field: remove it
The resulting kubeadm-config.yaml looks like this:
apiVersion: kubeadm.k8s.io/v1beta4
bootstrapTokens:
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.75.11
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  imagePullSerial: true
  taints: null
timeouts:
  controlPlaneComponentHealthCheck: 4m0s
  discovery: 5m0s
  etcdAPICall: 2m0s
  kubeletHealthCheck: 4m0s
  kubernetesAPICall: 1m0s
  tlsBootstrap: 5m0s
  upgradeManifests: 5m0s
---
apiServer: {}
apiVersion: kubeadm.k8s.io/v1beta4
caCertificateValidityPeriod: 87600h0m0s
certificateValidityPeriod: 8760h0m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
encryptionAlgorithm: RSA-2048
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
controlPlaneEndpoint: "192.168.75.10:6443"
kubernetesVersion: 1.31.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
proxy: {}
scheduler: {}
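Before using the file, a quick sanity check can save some debugging; assuming your kubeadm version ships the validate subcommand (added around 1.26), it looks like this:

# Validate the structure and field names of the kubeadm configuration file
kubeadm config validate --config ./kubeadm-config.yaml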
A side note: the ClusterConfiguration section contains two fields, caCertificateValidityPeriod and certificateValidityPeriod. These were only added in 1.31 and let you adjust the lifetime of the certificates kubeadm issues; by default the CA is valid for 10 years and the other certificates for 1 year. This couldn't be tuned before, which meant quite a few environments had to keep a close eye on certificate expiry (being able to adjust it is great!).
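Relatedly, if you ever want to confirm which expiry dates the issued certificates actually ended up with, kubeadm can list them:

# List the expiration dates of all certificates managed by kubeadm (run on a control-plane node)
sudo kubeadm certs check-expiration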
After making these changes, and after confirming that kube-vip.yaml is in place under /etc/kubernetes/manifests, run:
sudo kubeadm init --config=./kubeadm-config.yaml
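Once init completes, kubeadm prints the usual instructions for pointing kubectl at the new cluster; they boil down to copying the admin kubeconfig:

# Make the cluster reachable with kubectl for the current (non-root) user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config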
With the first control plane up, the remaining nodes can be joined. Below is a simple bash script for generating the join command:
#!/bin/bash
# Check that the required arguments were provided
if [ "$#" -lt 1 ]; then
echo "Usage: $0 <master/worker> [master_node_IP]"
echo "Example: $0 master 192.168.1.205"
echo "Example: $0 worker"
exit 1
fi
NODE_TYPE=$1
# Convert NODE_TYPE to lowercase
NODE_TYPE=$(echo "$NODE_TYPE" | tr '[:upper:]' '[:lower:]')
# Make sure NODE_TYPE is either master or worker
if [ "$NODE_TYPE" != "master" ] && [ "$NODE_TYPE" != "worker" ]; then
echo "Error: The first argument must be either 'master' or 'worker'."
exit 1
fi
# If joining as a master, make sure master_node_IP was provided
if [ "$NODE_TYPE" == "master" ] && [ "$#" -ne 2 ]; then
echo "Error: For 'master', you must provide the master_node_IP."
echo "Usage: $0 master <master_node_IP>"
echo "Example: $0 master 192.168.1.205"
exit 1
fi
MASTER_NODE_IP=$2
# Create the artifact directory
ARTIFACT_DIR="./artifact"
mkdir -p ${ARTIFACT_DIR}
# Generate the join command
JOIN_COMMAND=$(kubeadm token create --print-join-command)
# For a master, also generate the certificate key and append the --apiserver-advertise-address flag
if [ "$NODE_TYPE" == "master" ]; then
CERTIFICATE_KEY=$(kubeadm init phase upload-certs --upload-certs | grep -A1 'Using certificate key:' | tail -n1)
JOIN_COMMAND="${JOIN_COMMAND} --control-plane --certificate-key ${CERTIFICATE_KEY} --apiserver-advertise-address ${MASTER_NODE_IP}"
fi
# Write the join command to a file
JOIN_COMMAND_FILE="${ARTIFACT_DIR}/kubeadm_join_${NODE_TYPE}.sh"
echo "#!/bin/bash" > ${JOIN_COMMAND_FILE}
echo ${JOIN_COMMAND} >> ${JOIN_COMMAND_FILE}
# Make the file executable
chmod +x ${JOIN_COMMAND_FILE}
# Print the path of the generated file
echo "The kubeadm join command has been saved to ${JOIN_COMMAND_FILE}"
The script generates either the worker or the master join command; usage looks roughly like this:
Usage: ./generate_kubeadm_join.sh <master/worker> [master_node_IP]
sudo ./generate_kubeadm_join.sh master 192.168.75.12
# Example output:
kubeadm join 192.168.75.10:6443 --token tjiupg.zrzee1mc054fis3w --discovery-token-ca-cert-hash sha256:99c297f675d387686b20b7ddcd0bc182a478031e0031f0a5f5ceb8e034be3f73 --control-plane --certificate-key b8d5c09138933ba2e0628faecb699eeac294822eac91cefe15d2ed48b4c6a3ad --apiserver-advertise-address 192.168.75.12
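The generated script then just needs to be copied to the joining node and run there; for example (assuming SSH access between the nodes):

# Copy the generated join script to the second master and execute it there
scp ./artifact/kubeadm_join_master.sh k8s-master2:/tmp/
ssh k8s-master2 'sudo bash /tmp/kubeadm_join_master.sh'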
One thing to watch out for on the kube-vip side: only master1 has super-admin.conf, so the other two masters can simply use the original admin.conf (in other words, drop the final sed command).
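For reference, the sed in question is the 1.29+ workaround described in the kube-vip docs and issue #684 linked below; the following is a sketch of that workaround, not the exact command from the earlier post:

# On master1 only: point the kube-vip static pod at super-admin.conf (per the kube-vip 1.29+ workaround)
sudo sed -i 's#path: /etc/kubernetes/admin.conf#path: /etc/kubernetes/super-admin.conf#' /etc/kubernetes/manifests/kube-vip.yaml
# On master2 / master3: skip this step and keep admin.conf as-is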
Since these three machines will also double as worker nodes, remove the control-plane taint first:
kubectl taint node k8s-master1 k8s-master2 k8s-master3 node-role.kubernetes.io/control-plane-
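To double-check that the taint is really gone before scheduling workloads, listing the taints per node works well:

# Show the remaining taints on every node (should be <none> for all three)
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints'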
Next, install the CNI and the k8s cluster is ready to use:
# Find the release you need at https://github.com/cilium/cilium-cli/releases/
wget https://github.com/cilium/cilium-cli/releases/download/v0.16.16/cilium-linux-amd64.tar.gz
tar xzvf cilium-linux-amd64.tar.gz
sudo mv cilium /usr/local/bin
cilium install
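Before looking at the nodes, cilium-cli can also confirm the installation itself is healthy; `cilium status --wait` blocks until everything is up:

# Wait for Cilium to report all of its components as OK
cilium status --wait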
Once the installation is done, kubectl get nodes shows every node in the Ready state:
NAME          STATUS   ROLES           AGE   VERSION
k8s-master1   Ready    control-plane   53m   v1.31.0
k8s-master2   Ready    control-plane   39m   v1.31.0
k8s-master3   Ready    control-plane   39m   v1.31.0
It took quite a few days, but the k8s cluster is finally up. Next, we can use it to experiment with some of the interesting things in the documentation~
https://github.com/kube-vip/kube-vip/issues/684
https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/